Two-level (bilevel) stochastic optimization formulations have become instrumental in a number of machine learning contexts, such as continual learning, neural architecture search, adversarial learning, and hyperparameter tuning. Practical stochastic bilevel optimization problems become challenging in optimization or learning scenarios where the number of variables is high or there are constraints. In this paper, we introduce a bilevel stochastic gradient method for bilevel problems with lower-level constraints. We also present a comprehensive convergence theory that covers all inexact calculations of the adjoint gradient (also called the hypergradient) and addresses both the lower-level unconstrained and constrained cases. To promote the use of bilevel optimization in large-scale learning, we introduce a practical bilevel stochastic gradient method (BSG-1) that requires neither second-order derivatives nor, in the lower-level unconstrained case, any system solves or matrix-vector products.
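The key practical idea, avoiding second-order derivatives in the hypergradient, can be illustrated with a toy 1-D sketch. The rank-one, gradients-only multiplier below is written in the spirit of BSG-1, but it is a hedged reconstruction and is not guaranteed to match the paper's exact update; the method itself handles vectors, constraints, and stochastic gradients.

```python
def bsg1_step(x, y, grad_fx, grad_fy, grad_gx, grad_gy, lr=0.01):
    """One upper-level update without second-order derivatives: the adjoint
    (hypergradient) correction is approximated using gradients only, in the
    spirit of BSG-1 (a sketch, not the paper's definitive update rule)."""
    gy = grad_gy(x, y)
    # Replace the adjoint system solve by a rank-one multiplier estimate.
    lam = grad_fy(x, y) * gy / (gy * gy + 1e-12)
    return x - lr * (grad_fx(x, y) - lam * grad_gx(x, y))

# Toy problem: min_x 0.5*(y(x) - 1)^2  s.t.  y(x) = argmin_y 0.5*(y - x)^2.
x, y = 0.0, 0.5
for _ in range(1000):
    for _ in range(5):                 # inexact lower-level SGD steps
        y -= 0.1 * (y - x)
    x = bsg1_step(x, y,
                  grad_fx=lambda x, y: 0.0,
                  grad_fy=lambda x, y: y - 1.0,
                  grad_gx=lambda x, y: x - y,
                  grad_gy=lambda x, y: y - x)
```

Here the upper-level variable converges toward x = 1, where the lower-level solution y(x) = x minimizes the upper-level objective.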
In this work we introduce reinforcement learning techniques for solving lexicographic multi-objective problems. These are problems that involve multiple reward signals, and where the goal is to learn a policy that maximises the first reward signal, and subject to this constraint also maximises the second reward signal, and so on. We present a family of both action-value and policy gradient algorithms that can be used to solve such problems, and prove that they converge to policies that are lexicographically optimal. We evaluate the scalability and performance of these algorithms empirically, demonstrating their practical applicability. As a more specific application, we show how our algorithms can be used to impose safety constraints on the behaviour of an agent, and compare their performance in this context with that of other constrained reinforcement learning algorithms.
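The lexicographic ordering described above can be made concrete with a small action-selection sketch for the action-value case: restrict to actions that are near-optimal for the primary reward, then break ties with the secondary reward. The function name and the `tol` slack parameter are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def lex_greedy_action(q1, q2, tol=1e-6):
    """Pick an action lexicographically: first restrict to actions that are
    (near-)optimal for the primary value estimates q1, then maximise the
    secondary value estimates q2 among them. q1, q2: per-action values for
    the current state."""
    q1 = np.asarray(q1, dtype=float)
    q2 = np.asarray(q2, dtype=float)
    # Actions whose primary value is within `tol` of the best are admissible.
    admissible = np.flatnonzero(q1 >= q1.max() - tol)
    # Break ties using the secondary objective.
    return int(admissible[np.argmax(q2[admissible])])

# Actions 0 and 2 tie on the primary objective; action 2 wins on q2.
a = lex_greedy_action([1.0, 0.5, 1.0], [0.1, 9.0, 0.7])
```

Further objectives are handled the same way, by repeatedly shrinking the admissible set before maximizing the next reward signal.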
Overfitting is a problem in Convolutional Neural Networks (CNN) that causes poor generalization of models on unseen data. To remedy this problem, many new and diverse data augmentation (DA) methods have been proposed to supplement or generate more training data and thereby improve its quality. In this work, we propose a new data augmentation algorithm: VoronoiPatches (VP). We primarily utilize non-linear recombination of information within an image, fragmenting and occluding small information patches. Unlike other DA methods, VP uses small convex polygon-shaped patches in a random layout to transport information around within an image. Sudden transitions created between patches and the original image can, optionally, be smoothed. In our experiments, VP outperformed current DA methods regarding model variance and overfitting tendencies. We demonstrate that data augmentation utilizing non-linear recombination of information within images, and non-orthogonal shapes and structures, improves CNN model robustness on unseen data.
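A minimal NumPy sketch of a VoronoiPatches-style transform follows: partition the image into Voronoi cells around random seeds, then paste a few cells at random offsets, occluding the pixels underneath. The hyperparameters (`num_seeds`, `num_moved`, the offset range) and the absence of smoothing are simplifying assumptions, not the paper's settings.

```python
import numpy as np

def voronoi_patch_augment(img, num_seeds=12, num_moved=4, rng=None):
    """Sketch of a VoronoiPatches-style augmentation: split the image into
    Voronoi cells (convex polygonal patches) around random seed points and
    transport a few cells to random locations, occluding what was there."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    seeds = rng.integers(0, [h, w], size=(num_seeds, 2))
    # Label every pixel with its nearest seed -> convex Voronoi cells.
    d2 = (ys[..., None] - seeds[:, 0]) ** 2 + (xs[..., None] - seeds[:, 1]) ** 2
    labels = d2.argmin(axis=-1)
    out = img.copy()
    for cell in rng.choice(num_seeds, size=num_moved, replace=False):
        cy, cx = np.nonzero(labels == cell)
        dy = rng.integers(-h // 4, h // 4 + 1)
        dx = rng.integers(-w // 4, w // 4 + 1)
        ty = np.clip(cy + dy, 0, h - 1)
        tx = np.clip(cx + dx, 0, w - 1)
        out[ty, tx] = img[cy, cx]  # transport the patch, occluding the target
    return out
```

Smoothing the patch boundaries, as the abstract mentions, could be layered on top of this by blending along cell edges.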
Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices since they can provide the properties of non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
In recent years, deep-learning-based approaches have been introduced to solve time-series forecasting problems. These novel methods have demonstrated impressive performance in univariate and low-dimensional multivariate time-series forecasting tasks. However, when these methods are used to handle high-dimensional multivariate forecasting problems, their performance is severely limited by practical constraints on training time and GPU memory. In this paper, inspired by a change of basis in the Hilbert space, we propose a flexible data feature extraction technique that excels in high-dimensional multivariate forecasting tasks. Our approach was originally developed for the National Science Foundation (NSF) Algorithms for Threat Detection (ATD) 2022 Challenge. Implemented using the attention mechanism and Convolutional Neural Networks (CNN) architecture, our method demonstrates great performance and compatibility. Our models trained on the GDELT Dataset finished 1st and 2nd places in the ATD sprint series and hold promise for other time-series forecasting datasets.
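One simple way to realize a "change of basis" dimension reduction for high-dimensional multivariate series is to learn an orthonormal basis over the channel dimension and forecast in the coefficient space. The SVD-based basis below is an assumption for illustration; the paper does not prescribe this particular construction.

```python
import numpy as np

def fit_basis(train, k):
    """Learn an orthonormal basis for the cross-sectional (channel) dimension
    from training data of shape (time, channels) via SVD. Rows of the returned
    (k, channels) matrix are orthonormal."""
    _, _, vt = np.linalg.svd(train - train.mean(0), full_matrices=False)
    return vt[:k]

def to_basis(x, basis):      # (time, channels) -> (time, k)
    return x @ basis.T

def from_basis(z, basis):    # (time, k) -> (time, channels)
    return z @ basis

# A forecaster now operates in the k-dim coefficient space instead of the
# full channel space, cutting training time and GPU memory.
x = np.random.default_rng(0).normal(size=(100, 50))
basis = fit_basis(x, k=8)
z = to_basis(x, basis)
x_hat = from_basis(z, basis)
```

Any forecasting backbone (attention, CNN, etc.) can then be trained on `z` and its predictions mapped back with `from_basis`.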
We develop a wall model for large-eddy simulation (LES) that takes into account various pressure-gradient effects using multi-agent reinforcement learning (MARL). The model is trained using low-Reynolds-number flow over periodic hills, with agents distributed on the wall along the computational grid points. The model utilizes a wall eddy-viscosity formulation as the boundary condition, which is shown to provide better predictions of the mean velocity field than the typical wall-shear-stress formulation. Each agent receives states based on local instantaneous flow quantities at an off-wall location, computes a reward based on the estimated wall-shear stress, and provides an action to update the wall eddy viscosity at each time step. The trained wall model is validated in wall-modeled LES (WMLES) of flow over periodic hills at higher Reynolds numbers, and the results show the effectiveness of the model on flows with pressure gradients. The analysis of the trained model indicates that it is capable of distinguishing between the various pressure-gradient regimes present in the flow.
When state-action pairs have equivalent reward and transition dynamics, animals are able to infer this quickly from limited experience. Modern reinforcement learning systems, on the other hand, must painstakingly learn through trial and error that state-action pairs are value-equivalent, requiring prohibitively large numbers of samples from their environment. MDP homomorphisms have been proposed to reduce the observed MDP of an environment to an abstract MDP, which can enable more sample-efficient policy learning. Consequently, impressive gains in sample efficiency have been achieved when a suitable MDP homomorphism can be constructed a priori, typically by exploiting a practitioner's knowledge of the environment's symmetries. We propose a novel approach to constructing homomorphisms in discrete action spaces, which uses a partial model of environment dynamics to infer which state-action pairs lead to the same state, reducing the size of the state-action space by a factor equal to the cardinality of the action space. We call this method Equivalent Effect Abstraction. In gridworld environments, we empirically demonstrate that Equivalent Effect Abstraction improves sample efficiency in a model-free setting and planning efficiency for model-based methods. Furthermore, we show on Cartpole that our approach outperforms existing methods for learning homomorphisms, while using 33x less training data.
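The core mechanism can be sketched for a deterministic, discrete environment: state-action pairs that a (partial) dynamics model predicts to lead to the same next state share a single value estimate. The `model(s, a)` interface and the closure-based Q-table below are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def make_abstract_q(model):
    """Sketch of equivalent-effect abstraction: key the Q-table by the
    predicted next state rather than by (state, action), so all pairs with
    the same effect share one value -- shrinking the effective state-action
    space by roughly the cardinality of the action space."""
    q = defaultdict(float)  # keyed by predicted next state, not by (s, a)

    def get(s, a):
        return q[model(s, a)]

    def update(s, a, target, lr=0.1):
        key = model(s, a)
        q[key] += lr * (target - q[key])

    return get, update

# In a 1-D gridworld with actions {-1, +1}, 'right from 2' and 'left from 4'
# both land in state 3, so one update is shared by both pairs.
step = lambda s, a: s + a
get_q, update_q = make_abstract_q(step)
update_q(2, +1, target=1.0)
```

After the single update above, `get_q(4, -1)` already reflects the learned value, even though that pair was never visited.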
Legged robot locomotion is a challenging task due to a myriad of sub-problems, such as the hybrid dynamics of foot contact and the effects of the desired gait on the terrain. Accurate and efficient state estimation of the floating base and foot joints can help alleviate many of these issues by providing feedback information to robot controllers. Current state estimation methods are highly reliant on a conjunction of visual and inertial measurements to provide real-time estimates, and are thus handicapped in perceptually poor environments. In this work, we show that by leveraging the robot's kinematic chain model through a factor graph formulation, we can perform state estimation of the base and leg joints using primarily proprioceptive inertial data. We perform state estimation using a combination of preintegrated IMU measurements, forward kinematic computations, and contact detections in a factor-graph-based framework, allowing our state estimate to be constrained by the robot model. Experimental results in simulation and on hardware show that our approach outperforms current proprioceptive state estimation methods by 27% on average, while being generalizable to a variety of legged robot platforms. We demonstrate our results both quantitatively and qualitatively on a wide variety of trajectories.
The Mass Spectrometer and Incoherent Scatter radar (MSIS) model family has been developed and improved since the early 1970s. The most recent version of MSIS is the Naval Research Laboratory (NRL) MSIS 2.0 empirical atmosphere model. NRLMSIS 2.0 provides species density, mass density, and temperature estimates as a function of location and space weather conditions. The MSIS models have long been a popular choice of atmosphere model in the research and operations communities alike, but, as with many models, they do not provide uncertainty estimates. In this work, we develop an exospheric temperature model based on machine learning (ML) that can be used with NRLMSIS 2.0 to calibrate it relative to high-fidelity satellite density estimates. Instead of providing point estimates, our model (called MSIS-UQ) outputs a distribution, which is assessed using a metric called the calibration error score. We show that MSIS-UQ debiases NRLMSIS 2.0, reducing the differences between model and satellite density by 25% and bringing it closer to satellite density than the Space Force's High Accuracy Satellite Drag Model. We also demonstrate the model's uncertainty estimation capability by generating altitude profiles of species density, mass density, and temperature; this explicitly shows how exospheric temperature probabilities affect density and temperature profiles within NRLMSIS 2.0. A further study shows an improved ability to capture rapid overcooling relative to NRLMSIS 2.0 alone, enhancing the phenomena the model can capture.
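One common way to score distributional outputs like those of MSIS-UQ is to compare nominal probability levels against the observed fraction of targets falling below the corresponding predictive quantiles. The definition below is an illustrative sketch for Gaussian predictive distributions; the exact calibration error score used in the paper may differ.

```python
import math
import numpy as np

def calibration_error(mu, sigma, y, levels=None):
    """Sketch of a calibration error score: mean absolute gap between each
    nominal level p and the observed fraction of targets whose predictive
    CDF value (PIT) is at most p. Perfect calibration gives a score near 0."""
    if levels is None:
        levels = np.linspace(0.05, 0.95, 19)
    z = (np.asarray(y) - np.asarray(mu)) / (np.asarray(sigma) * math.sqrt(2))
    pit = 0.5 * (1.0 + np.array([math.erf(v) for v in z]))  # Gaussian CDF at y
    observed = np.array([(pit <= p).mean() for p in levels])
    return float(np.mean(np.abs(observed - levels)))
```

An overconfident model (predicted sigma too small for the true spread) scores noticeably worse under this metric than a well-calibrated one.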
Neural architecture search (NAS) has been studied extensively and has grown into a research field with substantial impact. While classical single-objective NAS searches for the architecture with the best performance, multi-objective NAS considers multiple objectives that should be optimized simultaneously, e.g., minimizing resource usage alongside validation error. Although considerable progress has been made in the field of multi-objective NAS, we argue that there is some discrepancy between the actual optimization problem of practical interest and the optimization problem that multi-objective NAS tries to solve. We resolve this discrepancy by formulating the multi-objective NAS problem as a quality diversity optimization (QDO) problem and introduce three quality diversity NAS optimizers (two of them belonging to the group of multifidelity optimizers), which search for high-performing yet diverse architectures tailored to application-specific niches, e.g., hardware constraints. By comparing these optimizers to their multi-objective counterparts, we demonstrate that quality diversity NAS in general outperforms multi-objective NAS with respect to both solution quality and efficiency. We further show how applications and future NAS research can thrive on QDO.
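The contrast with multi-objective search can be made concrete with a minimal MAP-Elites-style QD loop: instead of maintaining a single Pareto front, keep the best-performing architecture per niche (e.g., a hardware-constraint bucket). This is a generic QDO sketch, not one of the paper's three optimizers, and the toy "architectures" are just integers.

```python
import random

def qd_nas(evaluate, niche_of, sample, mutate, budget=200, seed=0):
    """Minimal quality-diversity loop: maintain an archive mapping each niche
    to its best (score, architecture) pair, filling it by random sampling and
    mutation of existing elites."""
    rng = random.Random(seed)
    archive = {}  # niche id -> (score, architecture)
    for i in range(budget):
        if not archive or i < 10:
            arch = sample(rng)                                    # explore
        else:
            arch = mutate(rng.choice(list(archive.values()))[1], rng)
        score = evaluate(arch)          # e.g., negative validation error
        niche = niche_of(arch)          # e.g., a parameter-count bucket
        if niche not in archive or score > archive[niche][0]:
            archive[niche] = (score, arch)
    return archive

# Toy example: 'architectures' are layer widths; niches are size buckets.
archive = qd_nas(
    evaluate=lambda w: -abs(w - 37),          # proxy for validation quality
    niche_of=lambda w: w // 16,               # hardware-size niche
    sample=lambda rng: rng.randrange(64),
    mutate=lambda w, rng: max(0, min(63, w + rng.choice([-3, -1, 1, 3]))),
)
```

The returned archive holds one elite per niche, i.e., a diverse set of high-performing solutions rather than a single trade-off front.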